AI analytics #1339
Conversation
- Added new internal API endpoint for documentation tools, allowing actions such as listing available docs, searching, and fetching specific documentation by ID.
- Updated environment configuration to support optional internal secret for enhanced security.
- Refactored existing search functionality to utilize the new docs tools API instead of the previous MCP server.
- Improved error handling and response parsing for documentation-related requests.
- Expanded documentation to clarify the relationship between the new tools and existing API functionalities.

This update streamlines the documentation access process and enhances the overall developer experience.
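The actions listed above can be sketched as a dispatch over three request variants. This is a minimal illustration only; the action names, payload shapes, and search behavior here are assumptions inferred from the summary, not the actual API:

```typescript
// Hypothetical request/response shapes for the internal docs tools endpoint,
// covering the three actions the commit describes: list, search, fetch-by-ID.
type DocsToolsRequest =
  | { action: "list" }
  | { action: "search"; query: string }
  | { action: "get"; id: string };

type Doc = { id: string; title: string; body: string };

function handleDocsTools(req: DocsToolsRequest, docs: Doc[]): Doc[] {
  switch (req.action) {
    case "list":
      return docs;
    case "search":
      // Naive case-sensitive substring search; the real endpoint's matching
      // and ranking are not specified in the summary.
      return docs.filter(
        (d) => d.title.includes(req.query) || d.body.includes(req.query),
      );
    case "get":
      return docs.filter((d) => d.id === req.id);
  }
}
```

The discriminated union keeps each action's required parameters type-checked at the call site.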
- Introduced error capturing for failed HTTP requests in the docs tools API, improving debugging capabilities.
- Updated the API response for unsupported methods to include an `Allow` header, clarifying the expected request type.

These changes enhance the robustness of the documentation tools integration and improve developer experience.
- Updated the key name in the capabilities section of the API documentation to follow a consistent naming convention, improving clarity and maintainability.
The `.gitmodules` file was updated in d22593d to point at `apps/backend/src/private/implementation`, but the corresponding gitlink entry (mode 160000) was never added to the tree. This caused `git clone --recurse-submodules` to silently skip the private submodule.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…on for docs tools

- Added `STACK_DOCS_INTERNAL_BASE_URL` to backend `.env` and `.env.development` files for AI tool bundle configuration.
- Removed references to `STACK_INTERNAL_DOCS_TOOLS_SECRET` from backend and docs environment files, and removed its validation logic from the docs tools API route.
- Introduced a new `.env` file for the docs app with essential configuration variables.
… into llm-mcp-flow
…r improved clarity and maintainability
aadesh18 left a comment:
made changes based on comments
@greptileai take your time but do a proper rereview, go through all the commits
```ts
body.question,
body.answer,
body.publish,
user.display_name ?? user.primary_email ?? user.id,
```
**Missing `opt()` wrapping for optional `requestId` arg**
`body.requestId` is passed directly as a bare value to `callReducerStrict`, but the SpacetimeDB reducer declares it as `t.string().optional()` — an `Option<String>`. All other optional fields in this codebase are wrapped with `opt()` (e.g., `opt(entry.projectId)`), which produces the tagged-variant encoding (`{ some: "…" }` / `{ none: [] }`) that SpacetimeDB's HTTP API requires.
When `requestId` is a non-empty string, the backend sends a bare JSON string instead of `{ "some": "…" }`, and SpacetimeDB will reject the call — silently losing the idempotency that `requestId` is meant to provide.
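A minimal sketch of the tagged-variant encoding, using a hypothetical `opt()` helper modeled on the one the comment references (the real helper's signature is an assumption):

```typescript
// Hypothetical helper mirroring the codebase's `opt()` wrapper: encodes an
// optional value in SpacetimeDB's tagged-variant form.
type SpacetimeOption<T> = { some: T } | { none: [] };

function opt<T>(value: T | null | undefined): SpacetimeOption<T> {
  return value === null || value === undefined ? { none: [] } : { some: value };
}

// A bare string is not a valid Option<String> on the wire; the tagged
// variant is what the HTTP API expects:
//   opt("abc123")  -> { some: "abc123" }
//   opt(undefined) -> { none: [] }
```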
(Path: `apps/backend/src/app/api/latest/internal/mcp-review/add-manual/route.ts`, line 40)

```ts
body.correctedQuestion,
body.correctedAnswer,
```
**Non-atomic two-phase mutation**
`upsert_qa_from_call` and `mark_human_reviewed` are issued as two separate reducer calls. If the first succeeds but the second fails (network blip, SpacetimeDB 502, timeout), the QA entry is created or updated but the call log is never marked as reviewed — leaving both records in an inconsistent state with no retry or rollback path. The `update-qa-entry` route was consolidated into a single `update_qa_entry_with_publish` reducer for exactly this reason; a similar `upsert_qa_from_call_and_mark_reviewed` atomic reducer would fix this route as well.
(Path: `apps/backend/src/app/api/latest/internal/mcp-review/update-correction/route.ts`, lines 40-41)
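The failure window can be illustrated with a simulated reducer transport. All names below are hypothetical, and the consolidated reducer is the one the review proposes, not one that exists yet:

```typescript
// Simulated reducer transport: records each successful reducer call and can
// be told to fail on the nth call, modeling a network blip between phases.
type CallReducer = (name: string, args: unknown[]) => Promise<void>;

function makeTransport(failOnCall?: number) {
  const log: string[] = [];
  let n = 0;
  const call: CallReducer = async (name, _args) => {
    n += 1;
    if (n === failOnCall) throw new Error(`simulated failure on call ${n}`);
    log.push(name);
  };
  return { call, log };
}

// Two-phase version: the QA upsert can land while the reviewed flag does not.
async function updateCorrectionTwoPhase(call: CallReducer): Promise<void> {
  await call("upsert_qa_from_call", []);
  await call("mark_human_reviewed", []);
}

// Consolidated version: one reducer call, so a mid-flight failure leaves no
// partial state (both mutations would commit in one server-side transaction).
async function updateCorrectionAtomic(call: CallReducer): Promise<void> {
  await call("upsert_qa_from_call_and_mark_reviewed", []);
}
```

If the transport fails on the second call, the two-phase version leaves only the upsert recorded; the atomic version either records everything or nothing.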
This PR adds an AI analytics dashboard to the internal tool and centralises AI query observability. Every call through the /ai/query and /integrations/ai-proxy endpoints is now logged to SpacetimeDB with token counts, cost, latency, and full conversation replay. The internal tool gains a new "Unified AI Endpoint Analytics" tab backed by a new useAiQueryLogs SpacetimeDB subscription, plus reviewer enrollment and unmark-reviewed flows.
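As an illustration only, one logged row covering the fields the description mentions might look like the sketch below. The field names and types are assumptions, not the actual SpacetimeDB schema:

```typescript
// Hypothetical shape of one AI query log row, based on the fields the PR
// description says are captured (tokens, cost, latency, conversation replay).
type AiQueryLog = {
  id: string;
  endpoint: "/ai/query" | "/integrations/ai-proxy";
  promptTokens: number;
  completionTokens: number;
  costUsd: number;
  latencyMs: number;
  conversation: { role: "user" | "assistant" | "system"; content: string }[];
};

// Example dashboard aggregate: total spend across the logged calls.
function totalCostUsd(logs: AiQueryLog[]): number {
  return logs.reduce((sum, l) => sum + l.costUsd, 0);
}
```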
How to review this PR: